I can only guess that your bucket is fairly large, and indexing all the filenames takes time. Each request can return at most 1000 filenames, so with e.g. 1M files we would need to issue 1,000 requests one after another.
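To illustrate the constraint, here is a minimal sketch of that sequential pagination loop. The `fetch_page` callback and the fake backend are hypothetical stand-ins for an S3-style listing call (which caps each page at 1000 keys and chains pages with a continuation token), not our actual client code:

```python
def list_all_keys(fetch_page):
    """Collect every key by issuing paged requests one after another.

    fetch_page(token) -> (keys, next_token) mimics an S3-style listing
    call that returns at most 1000 keys per request."""
    keys, token, requests = [], None, 0
    while True:
        page, token = fetch_page(token)
        requests += 1
        keys.extend(page)
        if token is None:  # no continuation token: listing is complete
            return keys, requests

# Hypothetical backend: 1M keys served 1000 at a time.
TOTAL = 1_000_000

def fake_fetch(token):
    start = token or 0
    end = min(start + 1000, TOTAL)
    page = [f"file-{i}" for i in range(start, end)]
    return page, (end if end < TOTAL else None)

keys, requests = list_all_keys(fake_fetch)
```

With 1M keys this loop issues exactly 1,000 requests, and because each request needs the previous one's token, they cannot be parallelized naively.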
The other factor is that the current indexing speed isn't the greatest; once we improve that, the overall cache rebuild should speed up.
As a test, you could try opening the drawer menu on the left side and building the storage statistics (the number of MB used and the number of files in the bucket). This follows a similar approach, but instead of indexing the results (which is slower) it only sums up the size values.
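The statistics pass can be sketched like this: it walks the same pages, but only accumulates a count and a byte total instead of building an index of names. The `fetch_page` callback and the fake backend below are hypothetical, for illustration only:

```python
def bucket_stats(fetch_page):
    """Count files and sum object sizes in one pass, without
    building an index of filenames."""
    count = total_bytes = 0
    token = None
    while True:
        # fetch_page(token) -> (objects, next_token); each object
        # is a (name, size_in_bytes) pair, at most 1000 per page.
        objects, token = fetch_page(token)
        count += len(objects)
        total_bytes += sum(size for _, size in objects)
        if token is None:
            return count, total_bytes

# Hypothetical backend: 2500 objects of 10 bytes each, paged by 1000.
TOTAL = 2500

def fake_fetch(token):
    start = token or 0
    end = min(start + 1000, TOTAL)
    page = [(f"file-{i}", 10) for i in range(start, end)]
    return page, (end if end < TOTAL else None)

count, total_bytes = bucket_stats(fake_fetch)
```

Because nothing is stored per file, this pass is bounded by the listing requests alone, which is why it makes a good probe of whether we can iterate over the bucket at all.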
That would at least prove that we're able to iterate over your whole dataset.